164 research outputs found
Embarrassingly Parallel Search
We propose Embarrassingly Parallel Search (EPS), a simple and efficient method for solving constraint programming problems in parallel. We split the initial problem into a large number of independent subproblems and solve them with the available workers (i.e., cores of machines). The decomposition into subproblems is computed by selecting a subset of variables and enumerating the combinations of values of these variables that are not detected inconsistent by the propagation mechanism of a CP solver. Experiments on satisfaction problems and on optimization problems suggest that generating between thirty and one hundred subproblems per worker leads to good scalability. We show that our method is quite competitive with the work-stealing approach and able to solve some classical problems at the maximum capacity of multi-core machines. Thanks to it, a user can parallelize the resolution of their problem without modifying the solver or writing any parallel source code, and can easily replay the resolution of a problem.
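The decomposition idea in the abstract can be sketched in plain Python. This is a toy model, not the paper's method: the constraint (all variables pairwise distinct) and the brute-force subproblem solver are hypothetical stand-ins, and a real CP solver's propagation would prune inconsistent prefixes far more cheaply.

```python
# Toy sketch of embarrassingly parallel search: fix a subset of variables,
# enumerate their consistent assignments, and solve each resulting
# subproblem independently on the available workers.
from itertools import product
from multiprocessing import Pool

DOMAIN = range(8)  # each variable takes a value in 0..7

def consistent(partial):
    # Hypothetical constraint: all assigned values pairwise distinct.
    return len(set(partial)) == len(partial)

def solve_subproblem(prefix):
    """Extend a fixed prefix over the 2 remaining variables by brute force."""
    solutions = []
    for rest in product(DOMAIN, repeat=2):
        candidate = prefix + rest
        if consistent(candidate):
            solutions.append(candidate)
    return solutions

if __name__ == "__main__":
    # Decompose on the first 2 of 4 variables, keeping only the prefixes
    # that the consistency check does not already reject.
    subproblems = [p for p in product(DOMAIN, repeat=2) if consistent(p)]
    with Pool() as pool:
        results = pool.map(solve_subproblem, subproblems)
    total = sum(len(r) for r in results)
    print(total)  # 8*7*6*5 = 1680 all-different assignments
```

Each subproblem is solved with no communication between workers, which is what makes the approach "embarrassingly" parallel; the paper's observation is that generating many more subproblems than workers (30-100 per worker) balances the load statically.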
On Approximating Restricted Cycle Covers
A cycle cover of a graph is a set of cycles such that every vertex is part of
exactly one cycle. An L-cycle cover is a cycle cover in which the length of
every cycle is in the set L. The weight of a cycle cover of an edge-weighted
graph is the sum of the weights of its edges.
We come close to settling the complexity and approximability of computing
L-cycle covers. On the one hand, we show that for almost all L, computing
L-cycle covers of maximum weight in directed and undirected graphs is APX-hard
and NP-hard. Most of our hardness results hold even if the edge weights are
restricted to zero and one.
On the other hand, we show that the problem of computing L-cycle covers of
maximum weight can be approximated within a factor of 2 for undirected graphs
and within a factor of 8/3 in the case of directed graphs. This holds for
arbitrary sets L. (To appear in SIAM Journal on Computing.)
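The definitions in this abstract are easy to make concrete. The helpers below are hypothetical (not from the paper): one verifies that a set of cycles is an L-cycle cover, the other sums the edge weights of a cover, with undirected edges encoded as frozensets of endpoints.

```python
# Check that a set of cycles covers every vertex exactly once and that
# every cycle length lies in L; then compute the cover's weight.
def is_l_cycle_cover(cycles, vertices, L):
    covered = [v for cyc in cycles for v in cyc]
    return sorted(covered) == sorted(vertices) and all(len(c) in L for c in cycles)

def cover_weight(cycles, w):
    # w maps an undirected edge (a frozenset of its endpoints) to its weight.
    total = 0
    for cyc in cycles:
        for i in range(len(cyc)):
            total += w[frozenset((cyc[i], cyc[(i + 1) % len(cyc)]))]
    return total
```

For example, two triangles covering six vertices form an L-cycle cover for L = {3} but not for L = {4}.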
Optimal General Matchings
Given a graph G = (V, E) and for each vertex v a subset B(v) of the set {0, 1, ..., deg_G(v)}, where deg_G(v) denotes the degree of vertex v in the graph G, a B-factor of G is any set F ⊆ E such that deg_F(v) ∈ B(v) for each vertex v, where deg_F(v) denotes the number of edges of F incident to v. The general factor problem asks the existence of a B-factor in a given graph. A set B(v) is said to have a gap of length l if there exists a natural number p such that p, p+l+1 ∈ B(v) and p+1, ..., p+l ∉ B(v). Without any restrictions the general factor problem is NP-complete. However, if no set B(v) contains a gap of length greater than 1, then the problem can be solved in polynomial time, and Cornuejols [Cor] presented an algorithm for finding a B-factor, if it exists. In this paper we consider a weighted version of the general factor problem, in which each edge has a nonnegative weight and we are interested in finding a B-factor of maximum (or minimum) weight. In particular, this version comprises the minimum/maximum cardinality variant of the general factor problem, where we want to find a B-factor having a minimum/maximum number of edges.
We present an algorithm for the maximum/minimum weight B-factor for the case when no set B(v) contains a gap of length greater than 1. This also yields the first polynomial time algorithm for the maximum/minimum cardinality B-factor for this case.
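The two definitions that drive this abstract, B-factors and gaps in the sets B(v), can be illustrated with small hypothetical helpers (these only check the definitions; they are not the paper's algorithm):

```python
# deg_F(v) is the number of edges of F incident to v; F is a B-factor
# iff deg_F(v) lies in B(v) for every vertex v.
def is_b_factor(F, vertices, B):
    deg = {v: 0 for v in vertices}
    for u, v in F:
        deg[u] += 1
        deg[v] += 1
    return all(deg[v] in B[v] for v in vertices)

def max_gap(S):
    """Length of the longest gap in a finite set of naturals (0 if none)."""
    s = sorted(S)
    return max((b - a - 1 for a, b in zip(s, s[1:])), default=0)
```

For instance, a perfect matching on four vertices is a B-factor for B(v) = {1}, and the set {0, 2} has a gap of length 1, which is exactly the boundary of the tractable case discussed above.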
Identically self-blocking clutters
A clutter is identically self-blocking if it is equal to its blocker. We prove that every identically self-blocking clutter different from {{a}} is nonideal. Our proofs borrow tools from Gauge Duality and Quadratic Programming. Along the way we provide a new lower bound for the packing number of an arbitrary clutter.
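The blocker of a clutter is the family of inclusion-wise minimal sets that intersect every member of the clutter. A brute-force computation makes the "identically self-blocking" condition concrete; the encoding below (a clutter as a set of frozensets over a ground set) is an assumption of this sketch, not notation from the paper.

```python
# Compute the blocker of a clutter by enumerating all subsets of the
# ground set, keeping the hitting sets, and filtering to the minimal ones.
from itertools import combinations

def blocker(clutter, ground):
    hitting = [frozenset(s)
               for r in range(len(ground) + 1)
               for s in combinations(ground, r)
               if all(set(s) & m for m in clutter)]
    return {h for h in hitting if not any(g < h for g in hitting)}
```

The trivial clutter {{a}} equals its own blocker, while the clutter {{1, 2}} does not: its blocker is {{1}, {2}}.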
Branching on multi-aggregated variables
Gamrath, Gerald; Melchiori, Anna; Berthold, Timo; Gleixner, Ambros M.; Salvagnin, Domenico
An In-Out Approach to Disjunctive Optimization
Abstract. Cutting plane methods are widely used for solving convex optimization problems and are of fundamental importance, e.g., to provide tight bounds for Mixed-Integer Programs (MIPs). This is obtained by embedding a cut-separation module within a search scheme. The importance of a sound search scheme is well known in the Constraint Programming (CP) community. Unfortunately, the "standard" search scheme typically used for MIP problems, known as the Kelley method, is often quite unsatisfactory because of saturation issues. In this paper we address the so-called Lift-and-Project closure for 0-1 MIPs associated with all disjunctive cuts generated from a given set of elementary disjunctions. We focus on the search scheme embedding the generated cuts. In particular, we analyze a general meta-scheme for cutting plane algorithms, called in-out search, that was recently proposed by Ben-Ameur and Neto [1]. Computational results on test instances from the literature are presented, showing that using a more clever meta-scheme on top of a black-box cut generator may lead to a significant improvement.
Terrestrial Implications of Cosmological Gamma-Ray Burst Models
The observation by the BATSE instrument on the Compton Gamma Ray Observatory
that gamma-ray bursts (GRBs) are distributed isotropically around the Earth but
nonuniformly in distance has led to the widespread conclusion that GRBs are
most likely to be at cosmological distances, making them the most luminous
sources known in the Universe. If bursts arise from events that occur in normal
galaxies, such as neutron star binary inspirals, then they will also occur in
our Galaxy about every hundred thousand to million years. The gamma-ray flux at
the Earth due to a Galactic GRB would far exceed that from even the largest
solar flares. The absorption of this radiation in the atmosphere would
substantially increase the stratospheric nitric oxide concentration through
photodissociation of N2, greatly reducing the ozone concentration for
several years through NOx catalysis, with important biospheric effects due
to increased solar ultraviolet flux. A nearby GRB may also leave traces in
anomalous radionuclide abundances. (Submitted to ApJ Letters.)
Maximum gradient embeddings and monotone clustering
Let (X,d_X) be an n-point metric space. We show that there exists a
distribution D over non-contractive embeddings into trees f:X-->T such that for
every x in X, the expectation with respect to D of the maximum over y in X of
the ratio d_T(f(x),f(y)) / d_X(x,y) is at most C (log n)^2, where C is a
universal constant. Conversely we show that the above quadratic dependence on
log n cannot be improved in general. Such embeddings, which we call maximum
gradient embeddings, yield a framework for the design of approximation
algorithms for a wide range of clustering problems with monotone costs,
including fault-tolerant versions of k-median and facility location. (To appear in Combinatorica.)
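The quantity bounded in this abstract, the maximum gradient of a point x under an embedding f into a tree, is max over y of d_T(f(x), f(y)) / d_X(x, y). A toy instance (entirely hypothetical, chosen only to illustrate the definition) is the 4-point cycle metric embedded into a path, i.e., a tree, by cutting one edge; the embedding is non-contractive, and the cut edge's endpoints pay the largest gradient.

```python
# Cycle metric on n points vs. the path metric obtained by deleting
# the edge between points 0 and n-1 (identity embedding into the path).
def cycle_dist(x, y, n=4):
    d = abs(x - y)
    return min(d, n - d)

def path_dist(x, y):
    return abs(x - y)

def max_gradient(x, points):
    # Maximum expansion of distances from x under the embedding.
    return max(path_dist(x, y) / cycle_dist(x, y) for y in points if y != x)
```

Here the endpoints of the cut edge have gradient 3 while interior points have gradient 1; the theorem above says that, in expectation over a suitable random tree, every point's gradient can be kept at O((log n)^2).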